
What struck me most from this week's readings was the study by Birhane et al., which highlighted that these models are trained on datasets containing a high proportion of sensitive or dangerous content, including violent material. Considering the recent news story of a young teenager killing themselves at the behest of a CharacterAI personality, real-world harm seems like the almost logical consequence of these datasets not being curated as carefully as they could have been, or of the guardrails that should be in place being absent.

Because AI is overwhelmingly developed and deployed with a capitalist ethos in mind, it seems that LLMs in their current form, in both how they are created and how they are used, inherently exploit the proletariat and other marginalized peoples (those with mental health differences, racialized folks, queer folks, etc.). There always seems to be a human cost associated with developing and selling these models.

Tech companies have used their vast resources to control the narrative and to stop regulatory forces, such as governments, from mitigating their exploitative uses of LLMs. They are not being held to the same standards that academics and other official sources of scholarly information are.

It seems that universities have realized they can make money by selling student data to brokers and licensing it to AI companies without obtaining proper consent from the students, professors, teaching assistants, and other staff who created that data, including scholarly papers, articles, books, and other materials that took an enormous amount of research, time, and effort to produce.

AI has already shown that it can co-opt social justice movements such as "Free Palestine" and effectively whitewash people's mental images of the Palestinian Genocide: the AI-generated "All Eyes on Rafah" image, a tasteful, aestheticized picture that does not properly convey the horror of the situation, dominated people's Instagram feeds for at least a day.